Towards Semantic KPI Measurement
Linked Data (LD) provides a powerful mechanism for integrating information across disparate sources. The respective technology can also be exploited to perform inferencing and derive added-value knowledge. As such, LD technology can assist in performing various analysis tasks over information related to business process execution. In the context of Business Process as a Service (BPaaS), the first real challenge is to collect and link information originating from different systems by following a certain structure. To this end, this paper proposes two main ontologies that serve this purpose: a KPI ontology and a Dependency one. Based on these well-connected ontologies, an innovative Key Performance Indicator (KPI) analysis system is then built which exhibits two main analysis capabilities: KPI assessment and drill-down, where the latter can be exploited to find the root causes of KPI violations. Compared to other KPI analysis systems, the use of LD enables the flexible construction and assessment of any kind of KPI, allowing experts to better explore the possible KPI space.
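The linking-and-assessment idea can be illustrated with a minimal sketch: KPI measurements collected from different systems are kept as subject-predicate-object triples, and a KPI is assessed by querying and aggregating over the linked graph. This is not the paper's actual ontology; all predicate names and the threshold rule are hypothetical.

```python
# Illustrative sketch (not the paper's KPI/Dependency ontologies): linked
# measurement facts as triples, with KPI assessment as aggregation over them.
# All predicates ("measures", "hasValue", "onProcess") are hypothetical.

triples = [
    ("m1", "measures", "avgResponseTime"),
    ("m1", "hasValue", 420.0),
    ("m1", "onProcess", "bpaas1"),
    ("m2", "measures", "avgResponseTime"),
    ("m2", "hasValue", 310.0),
    ("m2", "onProcess", "bpaas1"),
]

def objects(s=None, p=None):
    """Return objects of all triples matching the given subject/predicate."""
    return [o for (ts, tp, o) in triples
            if (s is None or ts == s) and (p is None or tp == p)]

def assess_kpi(metric, threshold):
    """Treat a KPI as violated when the mean of its linked measurements
    exceeds the threshold (a toy assessment rule)."""
    subjects = [s for (s, p, o) in triples if p == "measures" and o == metric]
    values = [objects(s=s, p="hasValue")[0] for s in subjects]
    mean = sum(values) / len(values)
    return {"metric": metric, "mean": mean, "violated": mean > threshold}

result = assess_kpi("avgResponseTime", threshold=300.0)
```

In a real LD setting the triple list would be an RDF store queried via SPARQL; the point here is only that linked, uniformly structured facts make arbitrary KPIs definable as queries.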
Sketching the vision of the Web of Debates
The exchange of comments, opinions, and arguments in blogs, forums, social media, wikis, and review websites has transformed the Web into a modern agora, a virtual place where all types of debates take place. This wealth of information remains mostly unexploited: due to its textual form, such information is difficult to automatically process and analyze in order to validate, evaluate, compare, combine with other types of information, and make actionable. Recent research in Machine Learning, Natural Language Processing, and Computational Argumentation has provided some solutions, which still cannot fully capture important aspects of online debates, such as various forms of unsound reasoning, arguments that do not follow a standard structure, information that is not explicitly expressed, and non-logical argumentation methods. Tackling these challenges would offer immense added value, as it would allow searching for, navigating through, and analyzing online opinions and arguments, giving a well-intentioned user a better picture of the various debates. Ultimately, it may lead to increased participation of Web users in democratic, dialogical interchange of arguments, more informed decisions by professionals and decision-makers, as well as easier identification of biased, misleading, or deceptive arguments. This paper presents the vision of the Web of Debates, a more human-centered version of the Web, which aims to unlock the potential of the abundance of argumentative information that currently exists online, offering its users a new generation of argument-based web services and tools that are tailored to their real needs.
Leveraging Knowledge Graphs for Zero-Shot Object-agnostic State Classification
We investigate the problem of Object State Classification (OSC) as a zero-shot learning problem. Specifically, we propose the first Object-agnostic State Classification (OaSC) method, which infers the state of a certain object without relying on knowledge or estimation of the object class. In that direction, we capitalize on Knowledge Graphs (KGs) for structuring and organizing knowledge, which, in combination with visual information, enables the inference of the states of objects in object/state pairs that have not been encountered in the method's training set. A series of experiments investigate the performance of the proposed method in various settings, against several hypotheses and in comparison with state-of-the-art approaches for object attribute classification. The experimental results demonstrate that knowledge of an object class is not decisive for the prediction of its state. Moreover, the proposed OaSC method outperforms existing methods on all datasets and benchmarks by a significant margin.
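The object-agnostic idea can be sketched in a few lines: each state has a prototype vector (in the paper derived from a KG; here toy values), and an image feature is assigned to the nearest prototype. The object class is never consulted, so object/state pairs unseen in training are handled uniformly. This is an illustration of the principle, not the paper's model.

```python
import math

# Illustrative sketch of object-agnostic state classification: nearest state
# prototype by cosine similarity. All vectors below are hypothetical toy values;
# in the actual method prototypes would come from KG + visual embeddings.

state_prototypes = {
    "open":   [1.0, 0.0, 0.2],
    "closed": [0.0, 1.0, 0.1],
    "folded": [0.1, 0.2, 1.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_state(visual_feature):
    # Pick the state whose prototype is closest; no object label is needed.
    return max(state_prototypes,
               key=lambda s: cosine(visual_feature, state_prototypes[s]))

pred = classify_state([0.9, 0.1, 0.15])
```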
A Cross-layer Monitoring Solution based on Quality Models
In order to implement cross-organizational workflows and to realize collaborations between small and medium enterprises (SMEs), the use of Web service technology, Service-Oriented Architecture and Infrastructure-as-a-Service (IaaS) has become a necessity. Based on these technologies, the need to monitor the quality of (a) the acquired resources, (b) the services offered to the final users and (c) the workflow-based procedures that SMEs follow in order to use these services has come to the fore. To address this need, we propose four metric Quality Models: three covering quality terms for the Workflow, Service and Infrastructure layers, and an additional one expressing the equality and inter-dependency relations between the previous three. To support these models, we have implemented a cross-layer monitoring system whose main advantages are its layer-specific metric aggregators and an event pattern discoverer for processing the monitoring log. Our evaluation focuses on the performance and accuracy of the proposed cross-layer monitoring system.
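A layer-specific metric aggregator can be pictured as follows: monitoring events tagged with their layer are grouped and averaged per (layer, metric) pair. The log format, layer names, and the averaging rule are hypothetical; the actual system processes richer event streams.

```python
from collections import defaultdict

# Minimal sketch of per-layer metric aggregation over a monitoring log,
# assuming (layer, metric, value) events; all names/values are hypothetical.
log = [
    ("infrastructure", "cpu_load", 0.60),
    ("infrastructure", "cpu_load", 0.80),
    ("service", "response_time_ms", 120.0),
    ("service", "response_time_ms", 180.0),
    ("workflow", "task_duration_s", 30.0),
]

def aggregate(log):
    """Average each metric separately per layer, mirroring the idea of
    layer-specific metric aggregators."""
    sums, counts = defaultdict(float), defaultdict(int)
    for layer, metric, value in log:
        sums[(layer, metric)] += value
        counts[(layer, metric)] += 1
    return {key: sums[key] / counts[key] for key in sums}

averages = aggregate(log)
```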
Theoretical Analysis and Implementation of Abstract Argumentation Frameworks with Domain Assignments
A representational limitation of current argumentation frameworks is their inability to deal with sets of entities and their properties, for example to express that an argument is applicable for a specific set of entities that have a certain property and not applicable for all the others. In order to address this limitation, we recently introduced Abstract Argumentation Frameworks with Domain Assignments (AAFDs), which extend Abstract Argumentation Frameworks (AAFs) by assigning to each argument a domain of application, i.e., a set of entities for which the argument is believed to apply. We provided formal definitions of AAFDs and their semantics, showed with examples how this model can support various features of commonsense and non-monotonic reasoning, and studied its relation to AAFs. In this paper, aiming to provide deeper insight into this new model, we present further results on the relation between AAFDs and AAFs and on the properties of the AAFD semantics, and we introduce an alternative, more expressive way to define the domains of arguments using logical predicates. We also offer an implementation of AAFDs based on Answer Set Programming (ASP) and evaluate it through a range of experiments with synthetic datasets.
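The flavor of domain assignments can be conveyed with a toy brute-force sketch. It assumes, as a simplification, that an attack between two arguments matters only where their domains overlap; this is an illustrative reading of the framework, not the paper's exact definitions (which are given via AAFD semantics, with the real implementation in ASP).

```python
from itertools import combinations

# Toy AAFD-style framework: each argument carries a domain of application,
# and (by assumption here) an attack is effective only on shared entities.
arguments = {"a": {1, 2}, "b": {2, 3}, "c": {4}}  # argument -> domain
attacks = {("a", "b")}                            # a attacks b

def effective(attack):
    """An attack counts only if the two domains share at least one entity."""
    x, y = attack
    return bool(arguments[x] & arguments[y])

def conflict_free(subset):
    return not any((x, y) in attacks and effective((x, y))
                   for x in subset for y in subset)

def conflict_free_sets():
    names = sorted(arguments)
    result = []
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            if conflict_free(combo):
                result.append(set(combo))
    return result

cf = conflict_free_sets()
```

Here {"a", "b"} is excluded because their domains overlap on entity 2, making the attack effective, while {"a", "c"} is conflict-free. A real implementation would encode such checks declaratively in ASP rather than enumerating subsets.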
RDF Digest: Ontology Exploration Using Summaries
Abstract. Ontology summarization aspires to produce an abridged version of the original ontology that highlights its most representative concepts. In this paper, we present RDF Digest, a novel platform that automatically produces and visualizes summaries of RDF/S Knowledge Bases (KBs). A summary is a valid RDFS document/graph that includes the most representative concepts of the schema, adapted to the corresponding instances. To construct this graph, our algorithm exploits the semantics and the structure of the schema and the distribution of the corresponding data/instances. A novel feature of our platform is that it allows summary exploration through extensible summaries. The aim of this demonstration is to dive into the exploration of the sources using summaries and to enhance the understanding of the various algorithms used.
Introduction
Given the explosive growth in both data size and schema complexity, data sources are becoming increasingly difficult to understand and use. Ontologies often have extremely complex schemas which are difficult to comprehend, limiting the exploration and exploitation potential of the information they contain. Besides the schema, the large amount of data in these sources increases the effort required to explore them. Over recent years, various techniques have been proposed for constructing overviews of ontologies [1-4], retaining however the most important ontology elements. These overviews are provided by means of an ontology summary. Ontology summarization [4] is defined as the process of distilling knowledge from an ontology in order to produce an abridged version. While summaries are useful, creating a "good" summary is a non-trivial task. A summary should be concise, yet it needs to convey enough information to enable a decent understanding of the original schema. Moreover, the summarization should be coherent and should provide extensive coverage of the entire ontology.
So far, although a number of research works have tried to address the problem of summarization from different angles, a solution that simultaneously exploits the semantics of the schemas and the data instances is still missing. In this demonstration, we focus on RDF/S KBs and demonstrate for the first time the implementation of the algorithms introduced i
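The general idea of combining schema structure with instance distribution can be sketched as a scoring problem: rank each schema class by a weighted mix of its structural connectivity and its instance count, then keep the top-k classes as the summary. The weights, class names, and counts below are hypothetical; RDF Digest's actual algorithm is more involved.

```python
# Illustrative sketch of instance-aware schema summarization (not RDF Digest's
# actual algorithm): score = weighted mix of normalized degree and normalized
# instance count; keep the k best-scoring classes. All data are toy values.

schema_edges = [("Person", "Organization"), ("Person", "Publication"),
                ("Publication", "Venue"), ("Organization", "Venue")]
instance_counts = {"Person": 500, "Organization": 40,
                   "Publication": 900, "Venue": 12}

def summarize(k, w_struct=0.5, w_inst=0.5):
    degree = {c: 0 for c in instance_counts}
    for s, o in schema_edges:          # undirected degree over schema edges
        degree[s] += 1
        degree[o] += 1
    max_deg = max(degree.values())
    max_inst = max(instance_counts.values())
    score = {c: w_struct * degree[c] / max_deg
                + w_inst * instance_counts[c] / max_inst
             for c in instance_counts}
    return sorted(score, key=score.get, reverse=True)[:k]

summary = summarize(2)
```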
A dialogical model for collaborative decision making based on compromises
Abstract. In this paper, we deal with group decision making and propose a model of dialogue among agents that have different knowledge and preferences, but are willing to compromise in order to collaboratively reach a common decision. Agents participating in the dialogue use internal reasoning to resolve conflicts emerging in their knowledge during communication and to reach a decision that requires the fewest compromises. Our approach has significant potential, as it may allow targeted knowledge exchange, partial disclosure of information, and efficient or informed decision-making, depending on the topic of the agents' discussion.
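The compromise-minimizing intuition can be reduced to a toy calculation: if each agent ranks the candidate options, the cost of an option for an agent is its distance from that agent's top choice, and the group picks the option with the least total cost. This ignores the dialogue and internal-reasoning machinery of the actual model; agent names and options are hypothetical.

```python
# Toy sketch of "decision requiring the fewest compromises" (not the paper's
# dialogue protocol): minimize the summed rank distance from each agent's
# top preference. Agents, options, and rankings are hypothetical.

preferences = {  # agent -> options ordered from most to least preferred
    "alice": ["cinema", "theatre", "dinner"],
    "bob":   ["dinner", "cinema", "theatre"],
    "carol": ["cinema", "dinner", "theatre"],
}

def least_compromise(preferences):
    options = next(iter(preferences.values()))
    cost = {o: sum(ranking.index(o) for ranking in preferences.values())
            for o in options}
    # Ties broken alphabetically for determinism.
    return min(options, key=lambda o: (cost[o], o))

decision = least_compromise(preferences)
```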
Ontology evolution: a process-centric survey
Ontology evolution aims at keeping an ontology up to date with respect to changes in the domain that it models or novel requirements of the information systems that it enables. The recent industrial adoption of Semantic Web techniques, which rely on ontologies, has increased the importance of ontology evolution research. Typical approaches to ontology evolution are designed as multiple-stage processes combining techniques from a variety of fields (e.g., natural language processing and reasoning). However, the few existing surveys on this topic lack an in-depth analysis of the various stages of the ontology evolution process. This survey extends the literature by adopting a process-centric view of ontology evolution. Accordingly, we first provide an overall process model synthesized from an overview of the existing models in the literature. We then survey the major approaches to each step in this process and conclude with future challenges for techniques addressing each particular stage.